Inferring collective dynamical states from widely unobserved systems
When assessing spatially-extended complex systems, one can rarely sample the
states of all components. We show that this spatial subsampling typically leads
to severe underestimation of the risk of instability in systems with
propagating events. We derive a subsampling-invariant estimator, and
demonstrate that it correctly infers the infectiousness of various diseases
under subsampling, making it particularly useful in countries with unreliable
case reports. In neuroscience, recordings are strongly limited by subsampling.
Here, the subsampling-invariant estimator allows us to revisit two prominent
hypotheses about the brain's collective spiking dynamics:
asynchronous-irregular or critical. For rat, cat, and monkey alike, we
consistently identify a state that combines features of both and allows input
to reverberate in the network for hundreds of milliseconds. Overall, owing to
its ready applicability, the new estimator paves the way to novel insights for the study
of spatially-extended dynamical systems.
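The abstract does not spell out the estimator's form; as a minimal sketch of a multistep-regression approach consistent with it (all function names and parameter values are illustrative assumptions), the following simulates a driven branching process, subsamples it, and recovers the branching ratio m from the geometric decay of the regression slopes r_k = b * m^k, whose subsampling bias b cancels across lags:

```python
# Minimal sketch, assuming a driven branching process and binomial
# subsampling; mr_estimate and all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_branching(m, h, T):
    """Branching process A_{t+1} ~ Poisson(m * A_t + h)."""
    A = np.empty(T, dtype=np.int64)
    A[0] = 100
    for t in range(T - 1):
        A[t + 1] = rng.poisson(m * A[t] + h)
    return A

def mr_estimate(a, k_max=20):
    """Multistep regression: slopes r_k of a_{t+k} vs a_t follow b * m**k,
    so log r_k is linear in k with slope log m, for any subsampling b."""
    ks = np.arange(1, k_max + 1)
    r = np.array([np.polyfit(a[:-k], a[k:], 1)[0] for k in ks])
    return np.exp(np.polyfit(ks, np.log(r), 1)[0])

A = simulate_branching(m=0.98, h=10, T=100_000)
a = rng.binomial(A, 0.01)   # observe only 1 % of all events
# a naive one-step regression on `a` would yield b*m << m; the
# multistep estimate recovers m ~ 0.98 from both time series
print(mr_estimate(A), mr_estimate(a))
```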
Homeostatic plasticity and external input shape neural network dynamics
In vitro and in vivo spiking activity clearly differ. Whereas networks in
vitro develop strong bursts separated by periods of very little spiking
activity, in vivo cortical networks show continuous activity. This is puzzling
considering that both networks presumably share similar single-neuron dynamics
and plasticity rules. We propose that the defining difference between in vitro
and in vivo dynamics is the strength of external input. In vitro, networks are
virtually isolated, whereas in vivo every brain area receives continuous input.
We analyze a model of spiking neurons in which the input strength, mediated by
spike rate homeostasis, determines the characteristics of the dynamical state.
In more detail, our analytical and numerical results on various network
topologies show consistently that under increasing input, homeostatic
plasticity generates distinct dynamic states, ranging from bursting to
close-to-critical, reverberating, and irregular states. This implies that the
dynamic state of a neural network is not fixed but can readily adapt to the
input strength. Indeed, our results match experimental spike recordings in
vitro and in vivo: the in vitro bursting behavior is consistent with a state
generated by very low network input (< 0.1%), whereas in vivo activity suggests
that on the order of 1% of recorded spikes are input-driven, resulting in
reverberating dynamics. Importantly, this predicts that one can abolish the
ubiquitous bursts of in vitro preparations, and instead impose dynamics
comparable to in vivo activity by exposing the system to weak long-term
stimulation, thereby opening new paths to establish an in vivo-like assay in
vitro for basic as well as neurological studies.
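The model itself is detailed in the paper; as a toy sketch of the proposed mechanism (a branching-network approximation, a simple homeostatic update rule, and all parameter values are assumptions for illustration), one can check that homeostasis drives the recurrent coupling toward w = 1 - h/a_target, so weak input yields near-critical bursting while stronger input yields reverberating to irregular dynamics:

```python
# Toy sketch, not the paper's model: homeostatic regulation of a
# branching network under external input of rate h (events per step).
import numpy as np

rng = np.random.default_rng(1)

def run(h, N=1000, a_target=10.0, eta=1e-4, T=200_000):
    """A_{t+1} ~ Poisson(w * A_t + h), with w nudged by homeostasis so
    that the mean activity approaches a_target events per time step."""
    w, A = 0.5, int(a_target)
    for _ in range(T):
        A = min(rng.poisson(w * A + h), N)   # finite network caps bursts
        w = max(0.0, w + eta * (a_target - A) / a_target)
    return w

for h in (0.01, 0.1, 1.0):
    # fixed point: w -> 1 - h / a_target, i.e. the input fraction sets
    # the distance to criticality (w -> 1: bursty; w < 1: irregular)
    print(f"h = {h:4}: homeostatic coupling w = {run(h):.3f}")
```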
Learning more by sampling less: subsampling effects are model-specific
When studying real-world complex networks, one rarely has full access to all their components. As an example, the human central nervous system consists of about 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single-neuron activity, it can heavily distort observables that characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic of SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all other units.
Under subsampling, the avalanche measures A depended on three main factors. First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n: with small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites: with small d, A was overestimated, while with large d, A was underestimated.
Since, under subsampling, the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when the number of sampled sites and the distance between them are varied analogously for electrodes in neural tissue and units in a model, the observables of a correct model should behave the same as those of the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even if brain activity is far from being fully sampled.
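As a toy illustration of such subsampling distortions (a plain critical branching process stands in for the SOC models, and random sampling of events replaces the spatial electrode layout, so distance effects are deliberately not captured), one can compare avalanche-size statistics before and after subsampling:

```python
# Toy sketch: random subsampling distorts the avalanche-size measure A.
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(m=1.0, cap=10_000):
    """Total event count of one avalanche of a critical branching process."""
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(m * active)
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])
observed = rng.binomial(sizes, 0.05)   # record only 5 % of all events
observed = observed[observed > 0]      # fully missed avalanches are invisible

for name, s in (("fully sampled", sizes), ("subsampled", observed)):
    hist = np.bincount(s)
    print(f"{name:14}: P(size=1) = {hist[1] / hist.sum():.2f}, "
          f"mean size = {s.mean():.1f}")
```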
Correlated microtiming deviations in jazz and rock music
Musical rhythms performed by humans typically show temporal fluctuations.
While such fluctuations have been characterized in simple rhythmic tasks, it
remains an open question what their nature is when several musicians perform
music jointly in all its natural complexity. To study such fluctuations
in over 100 original jazz and rock/pop recordings played with and without
metronome, we developed a semi-automated workflow allowing the extraction of
cymbal beat onsets with millisecond precision. Analyzing the inter-beat
interval (IBI) time series revealed evidence for two long-range correlated
processes characterized by power laws in the IBI power spectral densities. One
process dominates on short timescales (a few beats) and reflects microtiming
variability in the generation of single beats. The other dominates on longer
timescales and reflects slow tempo variations. Whereas the latter did not show
differences between musical genres (jazz vs. rock/pop), the process on short
timescales showed higher variability for jazz recordings, indicating that jazz
makes stronger use of microtiming fluctuations within a measure than rock/pop.
Our results elucidate principles of rhythmic performance and can inspire
algorithms for artificial music generation. By studying microtiming
fluctuations in original music recordings, we bridge the gap between
minimalistic tapping paradigms and expressive rhythmic performances.
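A minimal sketch of the described spectral analysis on synthetic data (a random-walk tempo plus white microtiming noise; the onset-extraction workflow and the actual crossover frequency are not reproduced here): compute the IBI series from beat onsets and fit power-law slopes to its power spectral density in two frequency regimes:

```python
# Sketch with synthetic onsets: slow tempo drift dominates the low-frequency
# PSD, per-beat microtiming noise dominates the high-frequency end.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
tempo = 0.5 + np.cumsum(rng.normal(0, 1e-4, 2000))     # slowly drifting IBI (s)
onsets = np.cumsum(tempo + rng.normal(0, 0.01, 2000))  # + per-beat microtiming

ibi = np.diff(onsets)                        # inter-beat intervals in seconds
f, psd = welch(ibi - ibi.mean(), fs=1.0, nperseg=512)  # f in cycles per beat

lo = (f > 0) & (f < 0.02)                    # slow tempo variations
hi = f > 0.1                                 # beat-to-beat microtiming
print("low-f slope :", np.polyfit(np.log(f[lo]), np.log(psd[lo]), 1)[0])
print("high-f slope:", np.polyfit(np.log(f[hi]), np.log(psd[hi]), 1)[0])
```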
Dynamic Adaptive Computation: Tuning network states to task requirements
Neural circuits are able to perform computations under very diverse
conditions and requirements. The required computations impose clear constraints
on their fine-tuning: a rapid and maximally informative response to stimuli in
general requires decorrelated baseline neural activity. Such network dynamics
is known as asynchronous-irregular. In contrast, spatio-temporal integration of
information requires maintenance and transfer of stimulus information over
extended time periods. This can be realized at criticality, a phase transition
where correlations, sensitivity and integration time diverge. Being able to
flexibly switch between, or even combine, the above properties in a task-dependent
manner would present a clear functional advantage. We propose that cortex
operates in a "reverberating regime" because it is particularly favorable for
ready adaptation of computational properties to context and task. This
reverberating regime enables cortical networks to interpolate between the
asynchronous-irregular and the critical state by small changes in effective
synaptic strength or excitation-inhibition ratio. These changes directly adapt
computational properties, including sensitivity, amplification, integration
time and correlation length within the local network. We review recent
converging evidence that cortex in vivo operates in the reverberating regime,
and that various cortical areas have adapted their integration times to
processing requirements. In addition, we propose that neuromodulation enables a
fine-tuning of the network, so that local circuits can either decorrelate or
integrate, and quench or maintain their input depending on task. We argue that
this task-dependent tuning, which we call "dynamic adaptive computation",
presents a central organizing principle of cortical networks, and we discuss
first experimental evidence.
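The quantitative knob behind this regime can be made explicit: in a branching network with effective synaptic strength m, perturbations decay exponentially with integration time tau = -dt / ln(m), which diverges toward criticality (m -> 1) and vanishes for m -> 0. A worked sketch (the bin size dt = 4 ms is an assumption for illustration):

```python
# Integration time of a branching network as a function of the effective
# synaptic strength m; tau = -dt / ln(m) for activity binned at dt.
import numpy as np

dt = 4.0  # bin size in ms (illustrative assumption)
for m in (0.9, 0.98, 0.99, 0.999):
    tau = -dt / np.log(m)
    print(f"m = {m:6}: integration time ~ {tau:8.1f} ms")
# m ~ 0.98 already yields integration times of a few hundred ms,
# matching the reverberating timescales quoted above.
```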
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically-inspired
computing. Usually, inspiration is drawn from neural systems. This article
shows how to analyze neural systems using information theory to obtain
constraints that help identify the algorithms run by such systems and the
information they represent. Algorithms and representations identified
information-theoretically may then guide the design of biologically inspired
computing systems (BICS). The material covered includes the necessary
introduction to information theory and the estimation of information theoretic
quantities from neural data. We then show how to analyze the information
encoded in a system about its environment, and also discuss recent
methodological developments on the question of how much information each agent
carries about the environment either uniquely, redundantly, or
synergistically together with others. Last, we introduce the framework of local
information dynamics, where information processing is decomposed into component
processes of information storage, transfer, and modification -- locally in
space and time. We close by discussing example applications of these measures
to neural data and other complex systems.
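As a minimal illustration of the kind of estimate this material builds on (the naive plug-in form only; real analyses need the bias corrections the article discusses), here is mutual information between a discrete stimulus and a binned neural response:

```python
# Naive plug-in estimator of I(S;R) in bits for paired discrete samples.
import numpy as np

def mutual_information(s, r):
    s, r = np.asarray(s), np.asarray(r)
    joint = np.zeros((s.max() + 1, r.max() + 1))
    np.add.at(joint, (s, r), 1)              # joint histogram of (S, R)
    p = joint / joint.sum()
    ps, pr = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

rng = np.random.default_rng(4)
s = rng.integers(0, 2, 10_000)                    # binary stimulus
r = np.where(rng.random(10_000) < 0.8, s, 1 - s)  # response, 80 % faithful
print(mutual_information(s, r))  # ~ 1 - H(0.2) ~ 0.28 bits
```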
Tailored ensembles of neural networks optimize sensitivity to stimulus statistics
The dynamic range of stimulus processing in living organisms is much larger
than a single neural network can explain. For a generic, tunable spiking
network we derive that while the dynamic range is maximal at criticality, the
interval of discriminable intensities is very similar for any network tuning
due to coalescence. Compensating coalescence enables adaptation of
discriminable intervals. Thus, we can tailor an ensemble of networks optimized
to the distribution of stimulus intensities, e.g., extending the dynamic range
arbitrarily. We discuss potential applications in machine learning.
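A toy sketch of the underlying dynamic-range construction (a Kinouchi-Copelli-style excitable network is assumed here, since the abstract does not spell out the model, and all parameters are illustrative): measure the stationary response F(h) over input rates h and read off the decades of h spanned between 10% and 90% of the maximal response. An ensemble would then combine networks with different tunings to cover different intensity ranges:

```python
# Toy sketch: response curves F(h) of a tunable network and the resulting
# dynamic range, which is widest near criticality (m = 1).
import numpy as np

rng = np.random.default_rng(5)

def response(m, h, N=10_000, T=2_000):
    """Stationary mean fraction of active units at per-unit input rate h."""
    A, acc, n = 0, 0.0, 0
    for t in range(T):
        p_rec = 1 - (1 - m / N) ** A         # >= 1 recurrent hit (coalescence)
        p_act = 1 - (1 - h) * (1 - p_rec)    # external or recurrent activation
        A = int(rng.binomial(N - A, p_act))  # just-active units rest one step
        if t >= T // 2:
            acc, n = acc + A / N, n + 1
    return acc / n

hs = np.logspace(-6, -1, 11)
for m in (0.5, 0.95, 1.0):
    F = np.array([response(m, h) for h in hs])
    h10 = hs[np.argmax(F >= 0.1 * F.max())]
    h90 = hs[np.argmax(F >= 0.9 * F.max())]
    print(f"m = {m}: dynamic range ~ {10 * np.log10(h90 / h10):.0f} dB")
```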
Description of spreading dynamics by microscopic network models and macroscopic branching processes can differ due to coalescence
Spreading processes are conventionally monitored on a macroscopic level by
counting the number of incidences over time. The spreading process can then be
modeled either on the microscopic level, assuming an underlying interaction
network, or directly on the macroscopic level, assuming that microscopic
contributions are negligible. The macroscopic characteristics of both
descriptions are commonly assumed to be identical. In this work, we show that
these characteristics of microscopic and macroscopic descriptions can be
different due to coalescence, i.e., a node being activated at the same time by
multiple sources. In particular, we consider a (microscopic) branching network
(probabilistic cellular automaton) with annealed connectivity disorder, record
the macroscopic activity, and then approximate this activity by a (macroscopic)
branching process. In this framework, we analytically calculate the effect of
coalescence on the collective dynamics. We show that coalescence leads to a
universal non-linear scaling function for the conditional expectation value of
successive network activity. This allows us to quantify the difference between
the microscopic model parameter and established macroscopic estimates. To
overcome this difference, we propose a non-linear estimator that correctly
infers the model branching parameter for all system sizes.
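A minimal numerical sketch of this effect (fully connected annealed network, Monte-Carlo conditional expectation; all parameter values are illustrative): when several sources hit the same node in one time step it still activates only once, so E[A_{t+1} | A_t = a] bends below the linear branching-process prediction m*a + h as activity grows:

```python
# Sketch: coalescence makes E[A_{t+1} | A_t] sublinear in a finite network.
import numpy as np

rng = np.random.default_rng(6)
N, m, h = 10_000, 0.98, 10.0   # network size, branching parameter, drive

def expected_next(a, n_samples=100_000):
    """Monte-Carlo E[A_{t+1} | A_t = a]: each of the a active nodes hits any
    given node with prob. m/N; multiple hits coalesce into one activation."""
    p_hit = 1 - (1 - m / N) ** a
    return rng.binomial(N, p_hit, n_samples).mean() + h

for a in (10, 100, 1000, 5000):
    print(f"A_t = {a:5d}: network {expected_next(a):7.1f}   "
          f"linear m*a + h {m * a + h:7.1f}")
```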
TRENTOOL: an open-source toolbox to estimate neural directed interactions with transfer entropy
To investigate directed interactions in neural networks, we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement in the prediction of the future of a time series X from its own past, gained by incorporating information from the past of a second time series Y, is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle, such as Granger causality, modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems, such as the brain, nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique. TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open-source MATLAB toolbox based on the FieldTrip data format. ..
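TRENTOOL itself is a MATLAB toolbox; as a language-neutral illustration of the quantity it estimates (the naive plug-in form with history length 1, whereas TRENTOOL automates embedding, data-efficient estimation, and bias handling), here is transfer entropy for binary time series:

```python
# Naive plug-in TE(Y -> X) = I(X_{t+1}; Y_t | X_t) in bits, history length 1.
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    n = sum(triples.values())
    p = {k: v / n for k, v in triples.items()}
    p_x0y0, p_x1x0, p_x0 = Counter(), Counter(), Counter()
    for (x1, x0, y0), v in p.items():
        p_x0y0[x0, y0] += v
        p_x1x0[x1, x0] += v
        p_x0[x0] += v
    return sum(v * np.log2(v * p_x0[x0] / (p_x0y0[x0, y0] * p_x1x0[x1, x0]))
               for (x1, x0, y0), v in p.items())

rng = np.random.default_rng(7)
y = rng.integers(0, 2, 50_000)                    # driver time series
x = np.empty_like(y)
x[0] = 0
x[1:] = np.where(rng.random(49_999) < 0.9, y[:-1], 1 - y[:-1])  # X copies Y
print(transfer_entropy(x, y))  # ~ 1 - H(0.1) ~ 0.53 bits for Y -> X
print(transfer_entropy(y, x))  # ~ 0 bits: no information flows X -> Y
```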